
    EXTRACTION AND PREDICTION OF SYSTEM PROPERTIES USING VARIABLE-N-GRAM MODELING AND COMPRESSIVE HASHING

    In modern computer systems, memory accesses and power management are two major performance-limiting factors. Accesses to main memory are very slow compared to operations within a processor chip. Hardware write buffers, caches, out-of-order execution, and prefetch logic are commonly used to reduce the time spent waiting for main memory accesses; compiler loop interchange and data layout transformations can also help. Unfortunately, large data structures often have access patterns for which none of the standard approaches are useful. Using smaller data structures can significantly improve performance by allowing the data to reside in higher levels of the memory hierarchy. This dissertation proposes using a lossy data compression technique called "compressive hashing" to create "surrogates" that augment the original large data structures to yield faster typical data access. One way to optimize system performance for power consumption is to provide predictive control of system-level energy use. This dissertation creates a novel instruction-level cost model, the variable-n-gram model, which is closely related to the N-gram analysis commonly used in computational linguistics. The model does not require direct knowledge of complex architectural details and is capable of determining performance relationships between instructions from an execution trace. Experimental measurements are used to derive a context-sensitive model of the performance of each type of instruction in the context of an N-instruction sequence. Dynamic runtime power prediction mechanisms often suffer from high overhead costs. To reduce this overhead, the dissertation encodes the static instruction-level predictions into a data structure and uses compressive hashing to provide on-demand runtime access to those predictions. Genetic programming is used to evolve the compressive hash functions, and performance analysis of applications shows that runtime access overhead can be reduced by a factor of roughly 3x to 9x.
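    As an illustration only, the following Python sketch (not taken from the dissertation) shows the surrogate idea in miniature: an exact n-gram cost table is compressed into a small, fixed-size lossy hash table so that a runtime prediction costs one hash and one array read. The function names, the plain modulo hash, and the toy trace are assumptions made for illustration; the dissertation evolves its compressive hash functions with genetic programming, which this sketch does not attempt.

        # Sketch of a compressive-hash "surrogate" for an n-gram cost model.
        # Python's built-in hash() stands in for an evolved compressive hash.

        from collections import defaultdict

        def build_ngram_costs(trace, costs, n=3):
            """Average measured cost of the last instruction of every n-gram
            seen in an execution trace (a crude variable-n-gram-style model)."""
            totals = defaultdict(lambda: [0.0, 0])
            for i in range(n - 1, len(trace)):
                key = tuple(trace[i - n + 1:i + 1])
                totals[key][0] += costs[i]
                totals[key][1] += 1
            return {k: s / c for k, (s, c) in totals.items()}

        def build_surrogate(ngram_costs, table_size=1024):
            """Compress the exact table into a fixed-size lossy surrogate.
            Colliding n-grams overwrite each other, trading accuracy for size."""
            table = [0.0] * table_size
            for key, cost in ngram_costs.items():
                table[hash(key) % table_size] = cost
            return table

        def predict(surrogate, recent_instructions):
            """On-demand runtime prediction: one hash and one array read."""
            return surrogate[hash(tuple(recent_instructions)) % len(surrogate)]

        # Toy usage with a synthetic trace of opcodes and per-instruction costs.
        trace = ["load", "add", "store", "load", "add", "store", "branch"] * 100
        costs = [3.0 if op == "load" else 1.0 for op in trace]
        model = build_ngram_costs(trace, costs, n=3)
        surrogate = build_surrogate(model, table_size=256)
        print(predict(surrogate, ["load", "add", "store"]))

    The design point the sketch tries to convey is the trade-off named in the abstract: the surrogate may answer some lookups incorrectly because of collisions, but it is small enough to stay high in the memory hierarchy, which is where the claimed 3x to 9x reduction in access overhead would come from.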

    Processor Microarchitecture for Implementation of Ephemeral State Processing within Network Routers

    The evolving concept of Ephemeral State Processing (ESP) is reviewed. ESP allows the development of new, scalable, end-to-end network user services, and an evolving macro-level language is being developed to support ESP at the network node level. Three approaches for implementing ESP services at network routers can be considered. One approach is to use the existing processing capability within commercially available network routers. Another is to add a small-scale, existing ASIC-based general-purpose processor to an existing network router. This thesis research concentrates on a third approach: developing a special-purpose programmable Ephemeral State Processor (ESPR) Instruction Set Architecture (ISA) and its implementing microarchitecture for deployment within each ESP-capable node, providing ESP service within that node. A unique architectural characteristic of the ESPR is its scalable and temporal Ephemeral State Store (ESS), an associative memory required by the ESP service for storage and retrieval of bounded (short) lifetime ephemeral (tag, value) pairs of application data. The ESPR is implemented in Programmable Logic Device (PLD) technology within a network node, which offers the advantages of reconfigurability and in-field upgradability and supports the evolving growth of ESP services. Correct functional and performance operation of the presented ESPR microarchitecture is validated via Hardware Description Language (HDL) post-implementation (virtual prototype) simulation testing. Suggestions for future research related to improving the performance of the ESPR microarchitecture and to experimental deployment of ESP are also discussed.
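    As an illustration only, the following Python sketch (not the ESPR hardware design) models the essential behaviour of an Ephemeral State Store in software: an associative (tag, value) store whose entries stop being readable after a bounded lifetime. The class name, the lifetime parameter, and the lazy expire-on-read policy are assumptions made for illustration; the thesis realizes this as an associative memory in PLD hardware.

        # Software model of an Ephemeral State Store: (tag, value) pairs
        # with a bounded lifetime, expired lazily on lookup.

        import time

        class EphemeralStateStore:
            def __init__(self, lifetime_seconds=10.0):
                self.lifetime = lifetime_seconds
                self.entries = {}          # tag -> (value, expiry time)

            def put(self, tag, value):
                """Insert or refresh a (tag, value) pair with a fresh lifetime."""
                self.entries[tag] = (value, time.monotonic() + self.lifetime)

            def get(self, tag):
                """Return the value bound to tag, or None if absent or expired."""
                item = self.entries.get(tag)
                if item is None:
                    return None
                value, expiry = item
                if time.monotonic() > expiry:
                    del self.entries[tag]  # lazily reclaim the expired slot
                    return None
                return value

        # Toy usage: a short-lived value keyed by a 64-bit tag.
        ess = EphemeralStateStore(lifetime_seconds=0.05)
        ess.put(0xDEADBEEF, 1)
        print(ess.get(0xDEADBEEF))  # 1 while the entry is still live
        time.sleep(0.06)
        print(ess.get(0xDEADBEEF))  # None after the bounded lifetime elapses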